Moderate: Red Hat Ceph Storage 5.3 security and bug fix update

Related Vulnerabilities: CVE-2022-24785  

Synopsis

Moderate: Red Hat Ceph Storage 5.3 security and bug fix update

Type/Severity

Security Advisory: Moderate

Topic

An update for ceph, cephadm-ansible, ceph-iscsi, python-dataclasses, and python-werkzeug is now available for Red Hat Ceph Storage 5.3.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

  • Moment.js: Path traversal in moment.locale (CVE-2022-24785)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
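For background, CVE-2022-24785 affects Moment.js when it runs under Node.js: in Moment.js versions before 2.29.2, a locale name taken from untrusted input and passed to moment.locale() could contain path segments and be resolved to a file on disk. The sketch below only illustrates the underlying issue and a defensive allowlist pattern for applications that consume Moment.js directly; it is not the fix shipped in these packages, and the setLocaleSafely helper name is hypothetical.

  // Minimal sketch, assuming a Node.js/TypeScript application that sets the
  // Moment.js locale from user-controlled input (for example, a query string).
  // In Moment.js < 2.29.2 a crafted name such as "../../../../tmp/evil" could
  // be resolved as a file path; validating the name first avoids relying on
  // the library to reject it.
  import moment from "moment";

  // Hypothetical helper; the allowlist check is the point of the example.
  function setLocaleSafely(requested: string): string {
    // Accept only simple BCP 47-style tags such as "en", "en-gb", or "zh-cn".
    const looksLikeLocale = /^[a-z]{2,3}(-[a-z0-9]{2,8})*$/i.test(requested);
    if (!looksLikeLocale) {
      return moment.locale(); // keep the current locale for suspicious input
    }
    return moment.locale(requested.toLowerCase());
  }

  setLocaleSafely("../../../../tmp/evil"); // rejected: current locale is kept
  setLocaleSafely("fr");                   // applied if the "fr" locale is available

Upgrading to the fixed packages, which carry the library's own sanitization, remains the primary remediation; input validation of this kind is defense in depth.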

Additional Changes:

This update also fixes several bugs and adds various enhancements. Documentation for these changes is available from the Release Notes document linked to in the References section.

Bug Fix(es)

These new packages include numerous bug fixes and enhancements. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.3/html/release_notes/index

All users of Red Hat Ceph Storage are advised to upgrade to these updated packages that provide numerous enhancements and bug fixes.

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

Affected Products

  • Red Hat Enterprise Linux for x86_64 9 x86_64
  • Red Hat Enterprise Linux for x86_64 8 x86_64
  • Red Hat Ceph Storage (OSD) 5 for RHEL 8 x86_64
  • Red Hat Ceph Storage (MON) 5 for RHEL 8 x86_64

Fixes

  • BZ - 1749627 - RGW Multi site: 'radosgw-admin sync status' is hung on secondary when one of RGW process is down on primary
  • BZ - 1827519 - [RGW MS]: Data is not synced and 'radosgw-admin sync status' shows behind the shards but 'bucket sync status' shows completed
  • BZ - 1905785 - [RGW MS - MultiSite] : slow data sync in RGW MS scale cluster.
  • BZ - 1941668 - [GSS][RGW] Buckets out of sync in a Multi-site environment
  • BZ - 1957088 - [RGW] Suspending bucket versioning in primary/secondary zone also suspends bucket versioning in the archive zone
  • BZ - 1986826 - [rgw-multisite][swift-cosbench]: Size in index not reliably updated on object overwrite, leading to ambiguity in stats on primary and secondary.
  • BZ - 1989527 - RBD: `rbd info` cmd on rbd images on which flattening is in progress throws ErrImageNotFound
  • BZ - 2011686 - Rados gateway replication slow in multisite setup
  • BZ - 2014330 - [CEE][RGW][Kafka] Failed to send bucket notifications to Kafka with ssl
  • BZ - 2015028 - rgw-multisite/dynamic resharding: Objects not synced if dynamic reshard happens on both sites while sync is happening in multisite.
  • BZ - 2017660 - [cee/sd][RGW] Multisite setup buckets bilogs are not trimmed automatically for RGW Multi-tenant buckets and require manual trim
  • BZ - 2019870 - [cee/sd][rgw][rfe] add method to modify role max_session_duration for existing role
  • BZ - 2021009 - [RGW] data sync stuck for buckets even after running bucket sync run (sometimes need to run this command multiple times)
  • BZ - 2023164 - [RGW] Multisite data sync stuck for some buckets and needs bucket sync run to sync bucket
  • BZ - 2023552 - [CEE][ceph-dashboard] Fix RBD scalability issues
  • BZ - 2024308 - Internal reproducer - one of the buckets sync stuck - 8 objects not synced
  • BZ - 2025932 - [RFE] Give more verbose information when bucket sync run command is running and also on completion
  • BZ - 2026101 - [rgw/multisite-reshard]: radosgw-admin sync status reports 56 recovering shards on the slave after converting single site to multi-site.
  • BZ - 2026282 - [rgw/multisite-reshard]: Buckets on the slave site not resharded dynamically after converting single site to multi-site.
  • BZ - 2028220 - [RFE] Ill-formatted JSON response from RGW
  • BZ - 2037041 - [Workload-DFG][RHCS 5.x] [MS][DBR] RGW multisite sync slow/stuck
  • BZ - 2041692 - [4.3][multi-realm/multi-site] [ceph-ansible]: On converting a multi-realm single site environment to multisite, full sync does not happen for the second realm
  • BZ - 2042394 - [dynamic-resharding]: After enabling the resharding feature, resharding did not happen for buckets that had fill_status over 100%
  • BZ - 2052516 - [rgw-multisite] Incorrect reporting of recovering shards in sync status
  • BZ - 2052916 - [4.3][rgw-resharding]: Large omap object found in the buckets.index pool, corresponding to resharded buckets that did not completely sync.
  • BZ - 2055137 - [GSS][RGW] Custom 'Credentials Provider' fails when dealing with JWT tokens above certain size
  • BZ - 2062794 - [RFE] RBD Encryption support does not support clones [5.3]
  • BZ - 2064481 - Mails sent by Manager Alerts going into Spam as module lack message ID and date
  • BZ - 2066453 - [RGW MS] Bucket Sync run handle multiple data generations
  • BZ - 2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
  • BZ - 2072510 - [RFE][Please, add metadata information for CephFS related snapshots]
  • BZ - 2072690 - [RGW] [LC] : One delete-marker for object(myobjects9980) is not deleting in all 6 versioned buckets
  • BZ - 2075214 - [RFE] Dynamic Bucket Resharding(DBR) support in RGW Multisite
  • BZ - 2086441 - [RFE] add rgw_curl_tcp_keepalive option for http client requests
  • BZ - 2086471 - resume pending snapshot replayer shut down when an error is encountered
  • BZ - 2089220 - [rfe] translate radosgw-admin 2002 to ENOENT (POSIX); was: radosgw-admin bucket stats returns Unknown error 2002
  • BZ - 2091773 - (RHCS 5.3) [GSS] "deep-scrub starts" message missing in RHCS 5.1
  • BZ - 2095062 - [RADOS] MGR daemon crashed saying - FAILED ceph_assert(pending_service_map.epoch > service_map.epoch)
  • BZ - 2095670 - [GSS][rgw-multisite] Reshard buckets in archive zone
  • BZ - 2100553 - [cee/sd][Cephadm] 5.1 `ceph orch upgrade` adds the prefix "docker.io" to image in the disconnected environment
  • BZ - 2100602 - [Workload-DFG] radosgw-admin bucket stats command fails
  • BZ - 2101807 - [upgrade][5.0z4 to 5.3]: rgws went down and not coming up on upgrading from 5.0z4 to 5.3
  • BZ - 2102934 - [cephfs][snap_schedule] Adding retention is not working as expected
  • BZ - 2104835 - [RGW-MS][Scale]: Sync inconsistencies seen with a 20M objects bi-directional sync run.
  • BZ - 2105251 - [Dashboard][Security] Updated grafana password not reflecting in CLI command
  • BZ - 2105309 - [5.3-STAGING][RGW-MS]: bucket sync status reports few shards behind, even though the data is consistent
  • BZ - 2105324 - [5.3-STAGING][RGW-MS]: Continuous writes and deletes on 2 buckets, puts bucket sync behind few shards causing sync inconsistencies
  • BZ - 2107405 - [RHCS 5.3] removing snapshots created in nautilus after upgrading to pacific leaves clones around
  • BZ - 2108394 - rgw: object lock not enforced in some circumstances
  • BZ - 2108707 - [Workload-DFG][RHCS 5.2 ][Greenfield] [SingleSite] put workload fails to update bucket index at a rate of around 0.05% (5,000 per 10,000,000)
  • BZ - 2108886 - rgw: sync policy (per-bucket sync) breakage
  • BZ - 2109256 - Crash on malformed bucket URL
  • BZ - 2109675 - [cee][rgw] http response codes 404/504 seen along with latency over 60 seconds during PUT operations
  • BZ - 2109886 - [RADOS] Two OSDs are not coming up after rebooting entire cluster
  • BZ - 2109935 - [RFE] [rbd-mirror] : mirror image promote : error message can be tuned when demotion is not completely propagate
  • BZ - 2110008 - Ceph msgr should log when it reaches the DispatchQueue throttle limit
  • BZ - 2110338 - [5.3-STAGING][RGW-MS]: radosgw-admin sync status reports behind shards even when data is consistent.
  • BZ - 2110865 - [5.3-STAGING][RGW-MS]: metadata sync status is stuck and reports 4 shards behind although metadata is consistent.
  • BZ - 2111488 - [5.3-STAGING][RGW-MS]: Data sync stuck for buckets resharded to 1999 shards.
  • BZ - 2114607 - [Cephadm][A change in MON configuration doesn't update the contents of OSD config files accordingly]
  • BZ - 2117313 - 'data sync init' development for resharding with multisite
  • BZ - 2117672 - [cee/sd][RGW][rolling_upgrade.yml playbook showing error as "rgw_realm not defined"]
  • BZ - 2118295 - rgwlc: fix lc head marker point to non-exist lc entry
  • BZ - 2118798 - build: permit building with more recent cython
  • BZ - 2119256 - [Ceph-mgr] Module 'restful' has failed dependency: No module named 'dataclasses'
  • BZ - 2119449 - [RGW][MS]: seg fault on thread_name:radosgw-admin on executing incomplete 'sync group flow create ' command
  • BZ - 2119774 - [cee/sd][ceph-volume][RHCS 5.1]ceph-volume failed to deploy the OSD due to incorrect block_db_size
  • BZ - 2119853 - [RHCS 5.3] dups.size logging + COT dups trim command + online dups trimming fix
  • BZ - 2120187 - Upgrade from RHCS 4x to RHCS 5.3 fails with error - 'template error while templating string: no filter named ''ansible.utils.ipwrap''
  • BZ - 2120262 - [5.3][RGW-MS]: rgws segfaults on the primary site in thread_name:data-sync
  • BZ - 2121462 - [iscsi] [upgrade] [4.x to 5.x] [rhel-8] : target-api and target-gw services are stuck in AttributeError: 'Request' object has no attribute 'is_xhr' : upgrade failed saying Error EINVAL: iscsi REST API failed request with status code 500'
  • BZ - 2121489 - [GSS][rgw-multisite] Slow multisite replication
  • BZ - 2121548 - 5.3 upgrade fix: restore backward-compatible encoding of cls_rgw_bucket_instance_entry
  • BZ - 2121673 - [upgrade][5.1 to 5.3]: cephadm should not block upgrading to 5.3 with an error message related to existing rgw issues.
  • BZ - 2122130 - use actual monitor addresses when creating a peer bootstrap token
  • BZ - 2123335 - swift authentication fails with a "some" chance
  • BZ - 2123423 - code change: avoid use-after-move issues identified by Coverity scan
  • BZ - 2124423 - Observing multiple RGW crashes on upgraded multisite cluster
  • BZ - 2126787 - [5.3][RGW]: Getting WARNING: unable to find head object data pool during "radosgw-admin bucket list " operation
  • BZ - 2127319 - [Cephadm] - during mgr/mon upgrade cephadm is removing rgw.rgws daemon and key. (5 rgws out of 6 were removed)
  • BZ - 2128194 - [RGW][After rotating the ceph logs, the opslog file doesn't get automatically recreated]
  • BZ - 2129718 - [5.3][RGW-MS]: In a multisite, if data and bucket sync status reports are inconsistent, writing further data can lead to slow sync or sync stall behavior.
  • BZ - 2130116 - standby-replay mds is removed from MDSMap unexpectedly
  • BZ - 2131932 - [RGW-MS] RGW multisite sync is stuck after 4.3z1 to 5.3 upgrade
  • BZ - 2132481 - [cee/sd][rgw] swift bulkupload is failing with error 'bulk_upload cannot read_policy() for bucket'
  • BZ - 2135334 - [RGW]: archive zone: bucket index entries of deleted objects in versioned buckets are left behind
  • BZ - 2136551 - MON nodes subnet is different to RGW nodes subnet, results in Ceph-Ansible upgrade failing on task "set_fact _radosgw_address to radosgw_address_block ipv4"
  • BZ - 2138791 - [5.3][RGW]: Slow object expiration observed with LC
  • BZ - 2139258 - [cee/sd][rgw][crash] RGW instance getting crashed while performing the trim operation on non-existing bucket.
  • BZ - 2139422 - [RGW][MS]: Crash seen on malformed bucket URL
  • BZ - 2140569 - cephadm adopt fails with error file not found at TASK [get remote user keyring] in a cluster with RBD mirroring enabled
  • BZ - 2142141 - Monitor crash - ceph_assert(m < ranks.size()) - observed when number of monitors were reduced from 5 to 3 using ceph orchestrator
  • BZ - 2142174 - mon/Elector: notify_rank_removed erase rank from both live_pinging and dead_pinging sets for highest ranked MON
  • BZ - 2142674 - Ceph unresponsive after provoking failure in datacenter, no IO. Stretch Cluster internal mode.
  • BZ - 2143336 - The sync status indicates that "the data is caught up with source" but not all objects are synced
  • BZ - 2145022 - rgw: reports of put ops recreating former bucket index objects after resharding
  • BZ - 2149653 - [Ceph-Dashboard] Allow CORS if the origin ip is known
  • BZ - 2150968 - rbd: Storage is not reclaimed after persistentvolumeclaim and job that utilized it are deleted [5.3]
  • BZ - 2153781 - [cee][rgw] http response codes 404/504 seen along with latency over 60 seconds during PUT operations
  • BZ - 2156705 - [5.3][multisite/reshard][scale]: sync stuck with manual resharding at around 250M objects